Databricks Templatized Data Integration Jobs

You can use Databricks for data integration in a data pipeline in the Lazsa Platform. Templates are available for different combinations of source and target nodes, so you can create a Databricks data integration job in a few clicks. You can append to or overwrite the target data, and you can create data partitions to optimize data querying. After the specified operation completes, rejected records can be stored at a location that you specify.
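The sketch below illustrates, in plain PySpark, the kind of work such a templatized job performs: reading from a source, splitting out rejected records, writing valid rows to a target in append or overwrite mode with partitioning, and storing the rejected records separately. All paths, the table format, and the rejection rule are hypothetical placeholders for illustration; they are not Lazsa Platform APIs.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("databricks-integration-sketch").getOrCreate()

# Read from a hypothetical source location.
source_df = (spark.read
             .format("csv")
             .option("header", "true")
             .load("/mnt/source/orders"))

# Split valid rows from rejected ones.
# Example rule (assumption): order_id must be present.
valid_df = source_df.filter(F.col("order_id").isNotNull())
rejected_df = source_df.filter(F.col("order_id").isNull())

# Write valid rows to the target in append (or overwrite) mode,
# partitioned on a column commonly used in query filters.
(valid_df.write
    .mode("append")              # or "overwrite"
    .partitionBy("order_date")   # partitioning speeds up queries filtering on this column
    .format("delta")
    .save("/mnt/target/orders"))

# Store rejected records at a separate location for later review.
(rejected_df.write
    .mode("append")
    .format("delta")
    .save("/mnt/rejected/orders"))
```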

Complete the following steps to create a Databricks data integration job:


Recommended Topics

What's next? Data Integration using Amazon AppFlow